Search results for "Radial Basis Function"
showing 10 of 61 documents
Online fitted policy iteration based on extreme learning machines
2016
Reinforcement learning (RL) is a learning paradigm that can be useful in a wide variety of real-world applications. However, its applicability to complex problems remains limited for several reasons, most notably the large quantity of data the agent requires to learn useful policies and the poor scalability to high-dimensional problems caused by the use of local approximators. This paper presents a novel RL algorithm, called online fitted policy iteration (OFPI), that makes progress on both fronts. OFPI is based on a semi-batch scheme that increases the convergence speed by reusing data and enables the use of global approximators by reformulating the valu…
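The abstract does not give OFPI's implementation details; as a point of reference, the sketch below only illustrates the extreme learning machine idea it builds on (a fixed random hidden layer with a closed-form least-squares readout), on hypothetical synthetic data rather than RL value targets.

```python
# Minimal extreme learning machine (ELM) regressor: random hidden layer plus a
# ridge-regularized least-squares readout. Illustrative only; OFPI's semi-batch
# policy-iteration loop is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100, reg=1e-3):
    """Draw random hidden weights, then solve the output layer in closed form."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage on synthetic data (a stand-in for value-function targets).
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
W, b, beta = elm_fit(X, y)
print(elm_predict(X[:5], W, b, beta))
```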
Neural Networks as Soft Sensors: a Comparison in a Real World Application.
2006
Physical atmospheric parameters, such as temperature or humidity, can be indirectly estimated on the surface of a monument by means of soft sensors based on neural networks, provided an ambient air monitoring station operates in the neighborhood of the monument itself. Since the soft sensors work as virtual instruments, the accuracy of such measurements has to be analyzed and validated from statistical and metrological points of view. The paper compares different types of neural networks that can be used as soft sensors in a complex real-world application: the non-invasive monitoring of the conservation state of old monuments. In this context, several designed connectionist systems, based on radial…
Semi-Supervised Support Vector Biophysical Parameter Estimation
2008
Two kernel-based methods for semi-supervised regression are presented. The methods rely on building a graph or hypergraph Laplacian with both the labeled and unlabeled data, which is further used to deform the training kernel matrix. The deformed kernel is then used for support vector regression (SVR). The semi-supervised SVR methods are successfully tested in LAI estimation and ocean chlorophyll concentration prediction from remotely sensed images.
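As a hedged illustration of the general mechanism the abstract describes, the sketch below deforms an RBF kernel with a k-nearest-neighbor graph Laplacian using the standard Laplacian kernel-warping construction, K̃ = K − K(I + γLK)⁻¹γLK, and feeds the labeled block to an SVR. The paper's exact (hyper)graph Laplacian, data, and hyperparameters are assumptions here.

```python
# Graph-deformed kernel for semi-supervised SVR (illustrative sketch).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph
from sklearn.svm import SVR
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(0)
X_lab = rng.uniform(size=(30, 4)); y_lab = X_lab.sum(axis=1)   # labeled data (synthetic)
X_unl = rng.uniform(size=(200, 4))                             # unlabeled data
X_all = np.vstack([X_lab, X_unl])

K = rbf_kernel(X_all, X_all, gamma=1.0)                 # base training kernel
W = kneighbors_graph(X_all, n_neighbors=10, mode='connectivity')
L = laplacian(0.5 * (W + W.T)).toarray()                # symmetrized graph Laplacian
gamma = 1e-2
K_def = K - K @ np.linalg.solve(np.eye(len(X_all)) + gamma * L @ K, gamma * L @ K)

svr = SVR(kernel='precomputed').fit(K_def[:30, :30], y_lab)  # train on labeled block
pred = svr.predict(K_def[:30, :30])                          # predictions on labeled points
```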
Regularized RBF Networks for Hyperspectral Data Classification
2004
In this paper, we analyze several regularized types of Radial Basis Function (RBF) Networks for crop classification using hyperspectral images. We compare the regularized RBF neural network with Support Vector Machines (SVM) using the RBF kernel, and the AdaBoost Regularized (ABR) algorithm using RBF bases, in terms of accuracy and robustness. Several scenarios of increasing input space dimensionality are tested for six images containing six crop classes. Attention is also paid to regularization, sparseness, and knowledge extraction.
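One of the compared baselines, an SVM with an RBF kernel, can be sketched as below. Synthetic features stand in for the hyperspectral bands; only the class count (six) mirrors the abstract, and the regularization and kernel-width grids are illustrative assumptions.

```python
# RBF-kernel SVM with cross-validated regularization (C) and kernel width (gamma).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = GridSearchCV(SVC(kernel='rbf'),
                    {'C': [1, 10, 100], 'gamma': [1e-3, 1e-2, 1e-1]}, cv=3)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))   # selected model and test accuracy
```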
Classification of Satellite Images with Regularized AdaBoosting of RBF Neural Networks
2008
A Generalised RBF Finite Difference Approach to Solve Nonlinear Heat Conduction Problems on Unstructured Datasets
2011
Radial Basis Functions have traditionally been used to provide a continuous interpolation of scattered data sets. However, this interpolation also allows for the reconstruction of partial derivatives throughout the solution field, which can then be used to drive the solution of a partial differential equation. Since the interpolation takes place on a scattered dataset with no local connectivity, the solution is essentially meshless. RBF-based methods have been successfully used to solve a wide variety of PDEs in this fashion. Such full-domain RBF methods are highly flexible and can exhibit spectral convergence rates (Madych & Nelson, 1990). However, in their traditional implementation the fu…
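The first step the abstract describes, interpolating scattered data with RBFs and differentiating the expansion, can be sketched in one dimension as below. A multiquadric basis and shape parameter are assumed; the paper's generalized finite-difference localization is not reproduced.

```python
# Full-domain RBF interpolation on scattered nodes, with a derivative
# reconstructed from the same expansion.
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 40))        # scattered nodes
f = np.sin(x)                                     # sampled field
c = 0.5                                           # shape parameter (assumed)

phi = lambda r: np.sqrt(r**2 + c**2)              # multiquadric RBF
dphi = lambda r: r / np.sqrt(r**2 + c**2)         # d(phi)/dr

R = x[:, None] - x[None, :]                       # signed pairwise separations
alpha = np.linalg.solve(phi(np.abs(R)), f)        # interpolation coefficients

xe = np.linspace(0, 2 * np.pi, 200)
Re = xe[:, None] - x[None, :]
u = phi(np.abs(Re)) @ alpha                       # interpolant
du = (dphi(np.abs(Re)) * np.sign(Re)) @ alpha     # reconstructed derivative
print(np.max(np.abs(du - np.cos(xe))))            # error vs. exact derivative
```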
A comparison analysis between unsymmetric and symmetric radial basis function collocation methods for the numerical solution of partial differential …
2002
In this article, we present a thorough numerical comparison between unsymmetric and symmetric radial basis function collocation methods for the numerical solution of boundary value problems for partial differential equations. A series of test examples was solved with these two schemes: different problems with different types of governing equations and boundary conditions. Particular emphasis was placed on the ability of these schemes to solve the steady-state convection-diffusion equation at high values of the Peclet number. From the examples tested in this work, it was observed that the system of algebraic equations obtained with the symmetric method was in general simpler to solve …
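For context, the unsymmetric (Kansa) collocation scheme the abstract compares can be sketched for a simple 1D Poisson problem as below; the symmetric (Hermite) variant differs in how the collocation matrix is assembled. The multiquadric basis, node count, and shape parameter are illustrative assumptions, not the paper's test setup.

```python
# Unsymmetric (Kansa) RBF collocation for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.
import numpy as np

n, c = 25, 0.3
x = np.linspace(0, 1, n)                          # collocation nodes
f = -np.pi**2 * np.sin(np.pi * x)                 # right-hand side
u_exact = np.sin(np.pi * x)

r = x[:, None] - x[None, :]
phi = np.sqrt(r**2 + c**2)                        # multiquadric basis values
d2phi = c**2 / (r**2 + c**2)**1.5                 # second derivative of the basis

A = d2phi.copy()                                  # interior rows: apply the PDE operator
A[0], A[-1] = phi[0], phi[-1]                     # boundary rows: plain interpolation
rhs = f.copy()
rhs[0] = rhs[-1] = 0.0                            # Dirichlet boundary values

alpha = np.linalg.solve(A, rhs)                   # unsymmetric linear system
u = phi @ alpha                                   # reconstructed solution
print(np.max(np.abs(u - u_exact)))                # error vs. exact solution
```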
BELM: Bayesian Extreme Learning Machine
2011
The theory of extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge and obtains confidence intervals (CIs) without the need to apply computationally intensive methods, e.g., bootstrap…
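A hedged sketch of the idea the abstract describes: keep ELM's random hidden layer but fit the output weights with Bayesian linear regression so predictions come with confidence intervals. Scikit-learn's BayesianRidge stands in for the paper's exact prior, and the data and hyperparameters are illustrative.

```python
# ELM hidden layer with a Bayesian readout that yields predictive intervals.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=300)

n_hidden = 80
W = rng.normal(size=(1, n_hidden)); b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                              # fixed random hidden layer

model = BayesianRidge().fit(H, y)                   # Bayesian output weights
H_new = np.tanh(np.linspace(-3, 3, 5)[:, None] @ W + b)
mean, std = model.predict(H_new, return_std=True)   # predictive mean and std
print(np.c_[mean, mean - 1.96 * std, mean + 1.96 * std])  # approx. 95% intervals
```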
Multilayer perceptron neural networks and radial-basis function networks as tools to forecast accumulation of deoxynivalenol in barley seeds contamin…
2011
The capacity of multi-layer perceptron artificial neural networks (MLP-ANN) and radial-basis function networks (RBFNs) to predict deoxynivalenol (DON) accumulation in barley seeds contaminated with Fusarium culmorum under different conditions has been assessed. Temperature (20-28 °C), water activity (0.94-0.98), inoculum size (7-15 mm diameter), and time were the inputs, while DON concentration was the output. The dataset was used to train, validate and test many ANNs. The optimal network was chosen by minimizing the mean-square error (MSE). Single-layer perceptrons with a low number of hidden nodes proved better than double-layer perceptrons, but the performance depended on the training …
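A minimal model-selection loop in the spirit of the abstract, comparing one- and two-hidden-layer MLP regressors by held-out mean-square error, might look as below. The inputs and targets are synthetic stand-ins, not the fungal-growth dataset, and the architectures tried are arbitrary.

```python
# Compare MLP architectures by validation MSE (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 4))                  # four scaled inputs (synthetic)
y = X @ np.array([1.0, 2.0, 0.5, 1.5]) + 0.1 * rng.normal(size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for layers in [(5,), (10,), (5, 5), (10, 10)]:
    mlp = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)
    print(layers, mean_squared_error(y_te, mlp.predict(X_te)))
```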
Optimizing Kernel Ridge Regression for Remote Sensing Problems
2018
Kernel methods have been very successful in remote sensing problems because of their ability to deal with high-dimensional non-linear data. However, they are computationally expensive to train when a large number of samples is used. In this context, while the amount of available remote sensing data has constantly increased, the size of training sets in kernel methods is usually restricted to a few thousand samples. In this work, we modified the kernel ridge regression (KRR) training procedure to deal with large-scale datasets. In addition, the basis functions in the reproducing kernel Hilbert space are defined as parameters to be optimized as well during the training process. This extends the n…
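As a baseline for the large-scale variant the abstract describes, standard KRR amounts to the closed-form dual solve α = (K + λI)⁻¹y, whose cubic cost in the number of samples is exactly what limits training-set size. A minimal sketch on synthetic data (the kernel width and regularization are assumptions):

```python
# Standard kernel ridge regression with an RBF kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 5))                 # stand-in for remote sensing features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)

lam, gamma = 1e-2, 1.0
K = rbf_kernel(X, X, gamma=gamma)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)    # dual coefficients, O(n^3)

X_new = rng.uniform(size=(5, 5))
pred = rbf_kernel(X_new, X, gamma=gamma) @ alpha        # predictions at new points
print(pred)
```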